Benchmarking VLMs’ Reasoning About Persuasive Atypical Images
Sina Malakouti*
Aysan Aghazadeh*
Ashmit Khandelwal
Adriana Kovashka
[Paper]
[GitHub]
[Slides]
[Bibtex]

Abstract

Vision language models (VLMs) have shown strong zero-shot generalization across various tasks, especially when integrated with large language models (LLMs). However, their ability to comprehend rhetorical and persuasive visual media, such as advertisements, remains understudied. Ads often employ atypical imagery, using surprising object juxtapositions to convey shared properties.

We introduce three novel tasks, Multi-label Atypicality Classification, Atypicality Statement Retrieval, and Atypical Object Recognition, to benchmark VLMs' understanding of atypicality in persuasive images. We evaluate how well VLMs use atypicality to infer an ad's message and test their reasoning abilities by employing semantically challenging negatives. Finally, we pioneer atypicality-aware verbalization by extracting comprehensive image descriptions sensitive to atypical elements.
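To make the retrieval setup concrete, below is a minimal sketch of Atypicality Statement Retrieval with semantically challenging negatives, assuming a CLIP-style image-text scorer. The image path, the example statements, and the negative constructions are illustrative assumptions, not the benchmark's exact data.

```python
# Hypothetical sketch: score candidate atypicality statements against an ad
# image with a CLIP-style model; retrieval succeeds if the ground-truth
# statement outscores the hard negatives (e.g., role-swapped variants).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("ad.jpg")  # hypothetical ad image
statements = [
    "A lipstick is replaced by a chili pepper.",  # ground-truth atypicality
    "A chili pepper is replaced by a lipstick.",  # hard negative: roles swapped
    "A lipstick sits next to a chili pepper.",    # hard negative: no atypicality
]

inputs = processor(text=statements, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_statements)

pred = logits.argmax(dim=-1).item()
print("retrieved:", statements[pred])
```

Role-swapped negatives like the second statement are what make the task a reasoning test: a model matching on bag-of-words similarity alone cannot distinguish them from the ground truth.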

Findings reveal that: (1) VLMs lack advanced reasoning capabilities compared to LLMs; (2) simple, effective strategies can extract atypicality-aware information, leading to comprehensive image verbalization; (3) atypicality aids persuasive ad understanding.
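Regarding finding (2), one simple strategy for atypicality-aware verbalization is to query a captioning VLM with several atypicality-sensitive prompts and concatenate the answers into a single description. The sketch below assumes a BLIP captioner and illustrative prompts; both are assumptions, not the paper's exact pipeline.

```python
# A minimal sketch of atypicality-aware verbalization: prompt a captioning
# VLM from several angles (generic scene, atypical objects, persuasive
# message) and join the outputs into one description for a downstream LLM.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("ad.jpg")  # hypothetical ad image

prompts = [
    "a photo of",                          # generic caption
    "an unusual object in this image is",  # surfaces atypical elements
    "this advertisement suggests that",    # surfaces the persuasive message
]

parts = []
for prompt in prompts:
    inputs = processor(image, prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    parts.append(processor.decode(out[0], skip_special_tokens=True))

verbalization = " ".join(parts)  # pass to an LLM to infer the ad's message
print(verbalization)
```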


Method



Acknowledgements

This research was supported in part by the University of Pittsburgh Center for Research Computing, RRID:SCR_022735, through the resources it provided. Specifically, this work used the H2P cluster, which is supported by NSF award number OAC-2117681. This work was also supported by National Science Foundation Grant No. 2006885. We gratefully acknowledge the support of those who contributed to the human evaluation of this work.

This template was originally made by Phillip Isola and Richard Zhang.